Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads where multiple ML models train on user data. Once spent, the DP budget is consumed forever. It is therefore crucial to allocate it efficiently so as to train as many models as possible. This paper presents a privacy scheduler that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called the privacy knapsack, which maximizes DP budget efficiency. We show that the privacy knapsack is NP-hard, so practical algorithms are necessarily approximate. We develop an approximation algorithm for the privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new synthetic private-ML workload we derived from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks than a state-of-the-art privacy scheduling algorithm that focuses on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some fairness for efficiency. Using DPK, DP ML operators should thus be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
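To make the privacy-knapsack setting concrete, here is a minimal sketch: tasks demand DP budget (epsilon) from several data blocks (the knapsack dimensions), and a scheduler packs as many tasks as possible within each block's remaining budget. The greedy-by-total-demand heuristic below is only an illustration of the problem setup, not the paper's DPK algorithm; the task and block names are invented.

```python
def greedy_schedule(tasks, capacity):
    """tasks: list of dicts mapping block name -> epsilon demand.
    capacity: dict mapping block name -> available epsilon budget.
    Returns the indices of tasks that were scheduled."""
    remaining = dict(capacity)  # do not mutate the caller's budgets
    scheduled = []
    # Try cheap tasks first: smaller total demand tends to pack more tasks.
    order = sorted(range(len(tasks)), key=lambda i: sum(tasks[i].values()))
    for i in order:
        demand = tasks[i]
        if all(demand[b] <= remaining.get(b, 0.0) for b in demand):
            for b in demand:
                remaining[b] -= demand[b]  # budget, once spent, is gone
            scheduled.append(i)
    return scheduled
```

Note that because spent budget never returns, a wrong early choice permanently blocks later tasks, which is why the exact problem is NP-hard and approximation matters.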
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses have been introduced that provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired privacy formalism, which provides a rigorous, generic, and flexible foundation for defense.
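The core mechanism can be sketched as follows: randomize the model's computation with calibrated Gaussian noise so that, by DP-style reasoning, a small norm-bounded input perturbation cannot change the expected output by much, and average the randomized outputs over many noise draws at prediction time. The toy linear "network" and the value of sigma below are assumptions for illustration, not the paper's actual architecture or certified parameters.

```python
import math
import random

def noisy_predict(x, weights, sigma, n_draws=200, seed=0):
    """Average softmax outputs over n_draws executions of a noise layer."""
    rng = random.Random(seed)       # seeded for reproducibility
    k = len(weights)                # number of classes
    totals = [0.0] * k
    for _ in range(n_draws):
        # Noise layer: perturb the representation with Gaussian noise.
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        scores = [sum(w * v for w, v in zip(row, noisy)) for row in weights]
        m = max(scores)             # stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        for j in range(k):
            totals[j] += exps[j] / z
    return [t / n_draws for t in totals]  # expected (smoothed) output
```

The certified-robustness argument then bounds how far this expected output can move under a bounded input change, analogously to DP's bound on output change under a one-record input change.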
Although artificial intelligence (AI) promises to support healthcare providers and improve the accuracy of medical diagnosis, a lack of transparency about dataset composition exposes AI models to the possibility of unintentional and avoidable error. In particular, public image datasets of dermatological conditions rarely contain information about skin tone. As a start toward increasing transparency, AI researchers have appropriated the Fitzpatrick skin type (FST), originally a measure of patient photosensitivity, to estimate skin tone for algorithmic audits of computer vision applications, including facial recognition and dermatological diagnosis. To understand the variability of estimated FST annotations on images, we compared several FST annotation methods on 460 images of skin conditions drawn from textbooks and online dermatology atlases. We find that the inter-rater reliability among three board-certified dermatologists is comparable to the inter-rater reliability between the board-certified dermatologists and two crowdsourcing methods. In contrast, we find that annotations from the Individual Typology Angle converted to FST (ITA-FST) method are significantly less correlated with the experts' annotations than the experts' annotations are with one another. These results demonstrate that algorithms based on ITA-FST are not reliable for annotating large-scale image datasets, but human-centered, crowd-based protocols can reliably add skin type transparency to dermatology datasets. Furthermore, we introduce the concept of dynamic consensus protocols with tunable parameters, including expert review, that increase the visibility of crowd work and provide guidance for future crowdsourced annotation of large image datasets.
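The ITA-FST method critiqued above converts a skin pixel's CIELAB values into an Individual Typology Angle and bins that angle into a Fitzpatrick type. A minimal sketch is below; the threshold scheme follows one common convention from the colorimetry literature, and exact cutoffs vary across papers, so treat them as illustrative assumptions.

```python
import math

def ita_degrees(L_star, b_star):
    """Individual Typology Angle from CIELAB lightness L* and yellow-blue b*."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))

def ita_to_fst(ita):
    """Bin an ITA value (degrees) into an estimated Fitzpatrick type I-VI.
    Cutoffs are one common convention; sources differ."""
    if ita > 55:
        return "I"
    if ita > 41:
        return "II"
    if ita > 28:
        return "III"
    if ita > 10:
        return "IV"
    if ita > -30:
        return "V"
    return "VI"
```

Because this pipeline reduces skin tone to a single angle computed from pixel statistics, it is sensitive to lighting and image quality, which is consistent with the low correlation with expert annotations reported above.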
We provide here a dataset for tasks related to natural language understanding and natural language inference. The dataset contains logic puzzles in natural language from three domains: comparison puzzles, knights-and-knaves puzzles, and zebra puzzles. Each puzzle is associated with the entire set of atomic questions that can be generated from the relations and individuals occurring in the text. For each question we provide the correct answer: entailment, contradiction, or ambiguity. The correctness of the answers was verified against a theorem prover. Good puzzles have two properties: (i) each piece of information is necessary, and (ii) no unnecessary information is provided. These properties make puzzles interesting candidates for machine comprehension tasks.
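For the knights-and-knaves domain, the three answer labels can be checked mechanically by enumerating possible worlds, in the spirit of the theorem-prover verification mentioned above. The brute-force sketch below (with an invented example puzzle) classifies a query as entailed, contradicted, or ambiguous; it is an illustration of the label semantics, not the dataset's actual verification tooling.

```python
from itertools import product

def classify(n_people, statements, query):
    """statements: list of (speaker, claim), where claim(world) -> bool and
    world[i] is True iff person i is a knight (truth-teller).
    A world is consistent iff every speaker's claim is true exactly when
    that speaker is a knight. Returns the label of query over all
    consistent worlds: 'entailment', 'contradiction', or 'ambiguity'."""
    worlds = [w for w in product([True, False], repeat=n_people)
              if all(w[s] == claim(w) for s, claim in statements)]
    vals = {query(w) for w in worlds}
    if vals == {True}:
        return "entailment"
    if vals == {False}:
        return "contradiction"
    return "ambiguity"
```

For example, if person 0 says "we are both knaves", the only consistent world makes person 0 a knave and person 1 a knight, so "person 1 is a knight" is entailed.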
We study the problem of multiple agents learning simultaneously in a multi-objective environment. Specifically, we consider two agents that repeatedly play a multi-objective normal-form game. In such games, the payoffs resulting from joint actions are vector-valued. Taking a utility-based approach, we assume a utility function exists that maps vectors to scalar utilities, and we consider agents that aim to maximize the utility of expected payoff vectors. As agents do not necessarily know their opponent's utility function or strategy, they must learn policies to interact with each other optimally. To help agents arrive at adequate solutions, we introduce four novel preference-communication protocols for both cooperative and self-interested communication. Each approach describes a specific protocol by which one agent communicates preferences over its actions and how the other agent responds. These protocols are subsequently evaluated against a no-communication baseline on a set of five benchmark games. We find that preference communication can drastically alter the learning process and lead to the emergence of cyclic Nash equilibria that had not been previously observed in this setting. Additionally, we consider communication schemes in which agents must learn when to communicate. For agents in games with Nash equilibria, we find that communication can be beneficial but hard to exploit when the agents have different preferred equilibria. When this is not the case, agents become indifferent to communication. In games without Nash equilibria, our results show differences across learning rates. With faster learners, we observe that explicit communication becomes more prevalent, at around 50% of the time, as it helps them learn a compromise joint policy. Slower learners retain this pattern to a lesser degree, but show increased indifference.
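A toy instance of this setting may help: a multi-objective normal-form game assigns a payoff vector to each joint action, and an agent applies its own utility function to the expected payoff vector when choosing an action. The game matrix, linear utility, and helper names below are invented for illustration; the paper's benchmark games and protocols are richer than this.

```python
# payoffs[a1][a2] is agent 1's two-objective payoff vector for joint action (a1, a2)
payoffs = [[(4, 0), (1, 1)],
           [(1, 1), (0, 4)]]

def utility(vec, weights):
    """Linear scalarization: one possible utility function over payoff vectors."""
    return sum(w * v for w, v in zip(weights, vec))

def best_response(opponent_probs, weights):
    """Action maximizing the utility of the *expected* payoff vector,
    given a belief (probability distribution) over the opponent's actions."""
    def expected_vec(a1):
        return tuple(sum(p * payoffs[a1][a2][k]
                         for a2, p in enumerate(opponent_probs))
                     for k in range(2))
    return max(range(len(payoffs)),
               key=lambda a1: utility(expected_vec(a1), weights))
```

Because the utility is applied to the expected vector (rather than expected utility of each vector), agents with different weightings can disagree about which joint actions are desirable, which is exactly where communicating preferences over actions becomes useful.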
More than 3 billion people lack access to care for skin disease. AI diagnostic tools may aid early skin cancer detection; however, most models have not been assessed on images of diverse skin tones or uncommon diseases. To address this, we curated the Diverse Dermatology Images (DDI) dataset, the first publicly available, pathologically confirmed image dataset with diverse skin tones. We show that state-of-the-art dermatology AI models perform substantially worse on DDI, with ROC-AUC dropping 29-40% compared to the models' original results. We find that dark skin tones and uncommon diseases, which are well represented in the DDI dataset, drive the performance drop-offs. Additionally, we show that state-of-the-art robust training methods cannot correct for these biases without diverse training data. Our findings identify important weaknesses and biases in dermatology AI that need to be addressed to ensure reliable application to diverse patients and across all diseases.
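The kind of subgroup audit described above amounts to computing ROC-AUC separately per skin-tone group so that gaps hidden by an aggregate score become visible. A minimal sketch follows, using the rank-pair (Mann-Whitney) formulation of AUC; the labels, scores, and group names are invented for illustration.

```python
def roc_auc(labels, scores):
    """AUC as the fraction of (positive, negative) pairs the model ranks
    correctly, counting ties as half-correct."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_by_group(labels, scores, groups):
    """Compute ROC-AUC separately for each subgroup (e.g., skin-tone group)."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = roc_auc([labels[i] for i in idx], [scores[i] for i in idx])
    return out
```

An aggregate AUC over all images can look acceptable even when one subgroup's AUC is far lower, which is why per-group reporting is essential for audits like this one.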